Dithiolopyrrolone Natural Products: Isolation, Synthesis and Biosynthesis
The Relationships Between CG, BFGS, and Two Limited-memory Algorithms
For the solution of linear systems, the conjugate gradient (CG) and BFGS methods are among the most popular and successful algorithms, each with its own advantages. Limited-memory methods have been developed to combine the best of the two. We describe and examine CG, BFGS, and two limited-memory methods (L-BFGS and VSCG) in the context of linear systems. We focus on the relationships between each of the four algorithms, and we present numerical results to illustrate those relationships.
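The classical CG iteration for a symmetric positive definite system can be sketched as follows; this is a minimal textbook implementation for illustration, not the code examined in the paper.

```python
# Minimal conjugate gradient sketch for solving A x = b,
# with A symmetric positive definite. Illustrative only.
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    x = np.zeros_like(b)
    r = b - A @ x              # residual
    p = r.copy()               # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)   # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p   # conjugate direction update
        rs_old = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

In exact arithmetic CG terminates in at most n iterations for an n-by-n system, which is the property that the limited-memory comparisons in the paper build on.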
Spatio-temporal Incentives Optimization for Ride-hailing Services with Offline Deep Reinforcement Learning
A fundamental question in any peer-to-peer ride-sharing system is how to meet passengers' requests, both effectively and efficiently, so as to balance supply and demand in real time. On the passenger side, traditional approaches focus on pricing strategies that adjust the distribution of demand by increasing the probability that users place ride requests. However, previous methods do not account for the impact of a change in strategy on future supply and demand: passengers' calls reposition drivers to different destinations, which affects drivers' income for a period of time afterward. Motivated by this observation, we attempt to optimize the distribution of demand by learning long-term spatio-temporal values as a guideline for the pricing strategy. In this study, we propose an offline deep reinforcement learning based method focusing on the demand side to improve the utilization of transportation resources and customer satisfaction. We adopt a spatio-temporal learning method to learn the value of different times and locations, and then incentivize passengers' ride requests to adjust the distribution of demand and balance supply and demand in the system. In particular, we model the problem as a Markov Decision Process (MDP).
How Rumors Spread and Stop over Social Media: a Multi-Layered Communication Model and Empirical Analysis
In this paper, we present a multi-layered communication (MLC) model that includes a trust-constructing procedure that can be used to explain how rumors spread and stop over social media. We define two structures in our MLC model: the social structure (SS) in the social layer, and the communication structure (CS) in the communicating layer. We propose two trust-building mechanisms (TBM): the social-based TBM (SBTBM) and the communicating-aimed TBM (CATBM). We discuss the trust-constructing procedure to demonstrate that an individual will sequentially decide to spread information based on three factors: the opinion environment, the individual's social influence, and the cost to confirm the information. The model predicts that individuals will tend to create links with others in social layers to extend their social structures (social clustering principle) when they use social media. Thus, a rumor will spread because a spreading core is formed in the CS. However, a rumor will be stopped by interactions that occur in the SS. Our empirical case supports this prediction. We analyzed the topology of the CS to show how a spreading core forms and the CS evolves, and how a rumor stops spreading because social behaviors in the SS encourage the development of more accurate information based on reality.
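The three-factor spread decision can be caricatured as a threshold rule: spread when perceived support from the opinion environment plus one's own influence outweighs the cost of confirming the information. The function below is a hypothetical simplification for intuition only, not the MLC model's actual trust-constructing procedure.

```python
# Hypothetical sketch of the three-factor spread decision described above.
# Not the paper's model; a simple threshold rule for intuition.
def decides_to_spread(neighbor_opinions, social_influence, confirm_cost):
    # neighbor_opinions: +1 (neighbor supports the rumor) / -1 (disputes it)
    support = sum(neighbor_opinions) / max(len(neighbor_opinions), 1)
    return support + social_influence > confirm_cost

# Mostly supportive neighborhood and some influence -> the rumor spreads.
spreads = decides_to_spread([+1, +1, -1], social_influence=0.2,
                            confirm_cost=0.4)
```

Under such a rule, a dense cluster of supporters (a spreading core in the CS) pushes the support term up, while dissenting ties in the SS pull it down and can halt the spread, mirroring the qualitative prediction above.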
Optimization Algorithms for Structured Machine Learning and Image Processing Problems
Optimization algorithms are often the solution engine for machine learning and image processing techniques, but they can also become the bottleneck in applying these techniques if they are unable to cope with the size of the data. With the rapid advancement of modern technology, data of unprecedented size has become more and more available, and there is an increasing demand to process and interpret the data. Traditional optimization methods, such as the interior-point method, can solve a wide array of problems arising from the machine learning domain, but it is also this generality that often prevents them from dealing with large data efficiently. Hence, specialized algorithms that can readily take advantage of the problem structure are highly desirable and of immediate practical interest. This thesis focuses on developing efficient optimization algorithms for machine learning and image processing problems of diverse types, including supervised learning (e.g., the group lasso), unsupervised learning (e.g., robust tensor decompositions), and total-variation image denoising. These algorithms are of wide interest to the optimization, machine learning, and image processing communities. Specifically, (i) we present two algorithms to solve the Group Lasso problem. First, we propose a general version of the Block Coordinate Descent (BCD) algorithm for the Group Lasso that employs an efficient approach for optimizing each subproblem exactly. We show that it exhibits excellent performance when the groups are of moderate size. For groups of large size, we propose an extension of the proximal gradient algorithm based on variable step-lengths that can be viewed as a simplified version of BCD. By combining the two approaches we obtain an implementation that is very competitive and often outperforms other state-of-the-art approaches for this problem. 
We show how these methods fit into the globally convergent general block coordinate gradient descent framework of (Tseng and Yun, 2009). We also show that the proposed approach is more efficient in practice than the one implemented in (Tseng and Yun, 2009). In addition, we apply our algorithms to the Multiple Measurement Vector (MMV) recovery problem, which can be viewed as a special case of the Group Lasso problem, and compare their performance to other methods in this particular instance; (ii) we further investigate sparse linear models with two commonly adopted general sparsity-inducing regularization terms, the overlapping Group Lasso $\ell_1/\ell_2$-norm penalty and the $\ell_1/\ell_\infty$-norm penalty. We propose a unified framework based on the augmented Lagrangian method, under which problems with both types of regularization and their variants can be efficiently solved. As one of the core building blocks of this framework, we develop new algorithms using a partial-linearization/splitting technique and prove that the accelerated versions of these algorithms require $O(1/\sqrt{\epsilon})$ iterations to obtain an $\epsilon$-optimal solution. We compare the performance of these algorithms against that of the alternating direction augmented Lagrangian and FISTA methods on a collection of data sets and apply them to two real-world problems to compare the relative merits of the two norms; (iii) we study the problem of robust low-rank tensor recovery in a convex optimization framework, drawing upon recent advances in robust Principal Component Analysis and tensor completion. We propose tailored optimization algorithms with global convergence guarantees for solving both the constrained and the Lagrangian formulations of the problem. These algorithms are based on the highly efficient alternating direction augmented Lagrangian and accelerated proximal gradient methods. We also propose a nonconvex model that can often improve the recovery results from the convex models.
We investigate the empirical recoverability properties of the convex and nonconvex formulations and compare the computational performance of the algorithms on simulated data. We demonstrate through a number of real applications the practical effectiveness of this convex optimization framework for robust low-rank tensor recovery; (iv) we consider the image denoising problem using total variation regularization. This problem is computationally challenging to solve due to the non-differentiability and non-linearity of the regularization term. We propose a new alternating direction augmented Lagrangian method, involving subproblems that can be solved efficiently and exactly. The global convergence of the new algorithm is established for the anisotropic total variation model. We compare our method with the split Bregman method and demonstrate the superiority of our method in computational performance on a set of standard test images.
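The proximal gradient approach to the Group Lasso mentioned in part (i) hinges on the fact that the proximal operator of the $\ell_1/\ell_2$ penalty is block soft-thresholding, which shrinks each group's coefficient vector toward zero as a whole. The sketch below illustrates a basic (non-accelerated) proximal gradient iteration under that standard prox; the problem sizes, step size, and groups are hypothetical, not the thesis's implementation.

```python
# Proximal gradient sketch for the (non-overlapping) Group Lasso:
#   min_x 0.5*||A x - b||^2 + lam * sum_g ||x_g||_2
# Illustrative only; data and parameters below are hypothetical.
import numpy as np

def group_soft_threshold(v, t):
    """Prox of t*||.||_2: shrink the whole block toward zero."""
    norm = np.linalg.norm(v)
    return np.zeros_like(v) if norm <= t else (1.0 - t / norm) * v

def group_lasso_pg(A, b, groups, lam, n_iter=500):
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)            # gradient of the smooth part
        z = x - g / L                    # gradient step
        for idx in groups:               # groupwise proximal step
            x[idx] = group_soft_threshold(z[idx], lam / L)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 6))
x_true = np.array([1.0, -1.0, 0.0, 0.0, 0.0, 0.0])  # only group 0 active
b = A @ x_true
groups = [np.arange(0, 2), np.arange(2, 4), np.arange(4, 6)]
x = group_lasso_pg(A, b, groups, lam=0.1)
```

Because the prox acts on each group jointly, inactive groups are zeroed out exactly, which is the sparsity pattern the Group Lasso is designed to produce.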